On-line Addendum to Sequential Inductive Learning
2. McSPRT

This section briefly summarizes the McSPRT algorithm for solving correlated selection problems; a more detailed discussion appears in [Gratch94]. McSPRT stands for "Multiple Comparison Sequential Probability Ratio Test." A sequential procedure [Govindarjulu81] is one that draws data a little at a time until enough has been gathered to make a statistical decision of some pre-specified quality. Sequential procedures tend to be more efficient than fixed-size procedures. The sequential probability ratio test (SPRT) is a sequential procedure that can be used, among other things, to decide the sign of the expected value of a distribution. A multiple comparison procedure [Hochberg87] is a statistical procedure that makes a global decision based on many separate decisions (called comparisons). McSPRT is a multiple comparison procedure for finding the treatment with the lowest (or highest) expected utility. It does so by comparing the differences in expected utility between treatments: after each training example, the treatment with the lowest estimated utility is compared pairwise with each other treatment, and if the difference in expected value between this best treatment and an alternative is significantly negative (as decided by an SPRT), the alternative is discarded.
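To make the selection loop concrete, the following Python sketch illustrates one pairwise-SPRT update of the kind described above. It is not the implementation from [Gratch94]: the names (mcsprt_step, sprt_decides_negative, epsilon) are illustrative, and it assumes a normal model for the paired utility differences, a plugged-in sample variance, and an indifference width epsilon separating the two SPRT hypotheses.

    import math
    from typing import Dict, List

    def sprt_decides_negative(diffs: List[float], epsilon: float,
                              alpha: float = 0.05, beta: float = 0.05) -> bool:
        """Wald SPRT on paired differences (assumption: roughly normal with
        estimated variance). Returns True once there is enough evidence that
        the mean difference is significantly negative, i.e. accepts
        H1: mean <= -epsilon against H0: mean >= +epsilon."""
        n = len(diffs)
        if n < 2:
            return False                      # need a variance estimate first
        mean = sum(diffs) / n
        var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
        if var == 0.0:
            return mean < -epsilon
        # Log-likelihood ratio of H1 (mu = -epsilon) vs H0 (mu = +epsilon)
        # under a normal model; the quadratic terms cancel for symmetric means.
        llr = (-2.0 * epsilon / var) * sum(diffs)
        return llr >= math.log((1.0 - beta) / alpha)   # Wald's upper boundary

    def mcsprt_step(utilities: Dict[str, List[float]], epsilon: float) -> None:
        """One update after a new training example: compare the treatment with
        the lowest estimated utility pairwise with every alternative, and drop
        alternatives that are significantly worse. Assumes each treatment has
        at least one observed utility value."""
        best = min(utilities, key=lambda t: sum(utilities[t]) / len(utilities[t]))
        for other in list(utilities):
            if other == best:
                continue
            # Pair the observations per training example (the selection problem
            # is correlated), then difference best - other.
            n = min(len(utilities[best]), len(utilities[other]))
            diffs = [utilities[best][i] - utilities[other][i] for i in range(n)]
            if sprt_decides_negative(diffs, epsilon):
                del utilities[other]          # alternative is significantly worse

Calling mcsprt_step after every new training example until only one treatment remains (or the data is exhausted) mirrors the incremental discarding described in the text.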